On complexity and convergence of high-order coordinate descent algorithms for smooth nonconvex box-constrained minimization

Authors

Abstract

Coordinate descent methods have considerable impact in global optimization because global (or, at least, almost global) minimization is affordable for low-dimensional problems. Coordinate descent methods with high-order regularized models for smooth nonconvex box-constrained minimization are introduced in this work. High-order stationarity asymptotic convergence and first-order stationarity worst-case evaluation complexity bounds are established. The computer work that is necessary for obtaining first-order $\varepsilon$-stationarity with respect to the variables of each coordinate-descent block is $O(\varepsilon^{-(p+1)/p})$, whereas the computer work for getting first-order $\varepsilon$-stationarity with respect to all the variables simultaneously is $O(\varepsilon^{-(p+1)})$. Numerical examples involving multidimensional scaling problems are presented, and the numerical performance of the methods is enhanced by means of strategies for choosing initial points.
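For context, high-order regularization methods of this family build, on each coordinate block, a $p$-th order Taylor model of $f$ augmented with a $(p+1)$-st power regularization term,

$$ m_\sigma(x, s) = f(x) + \sum_{j=1}^{p} \frac{1}{j!} \nabla^j f(x)[s]^j + \frac{\sigma}{p+1} \|s\|^{p+1}, $$

with $s$ supported on the current block and $x + s$ constrained to the box. For $p = 1$, approximately minimizing the model reduces to a projected gradient step on the block. The Python sketch below illustrates that $p = 1$ case under simplifying assumptions (cyclic blocks, a doubling rule for $\sigma$, a crude stopping test); all names and constants are illustrative, and this is a sketch of the general idea rather than the paper's algorithm.

import numpy as np

def project_box(x, lo, hi):
    # Componentwise projection onto the box [lo, hi].
    return np.minimum(np.maximum(x, lo), hi)

def block_cd_box(f, grad, x0, lo, hi, blocks, eps=1e-6, max_sweeps=500):
    # Cyclic block coordinate descent for min f(x) s.t. lo <= x <= hi.
    # Each block step minimizes the p = 1 regularized model
    # f(x) + g_B' d + (sigma/2) ||d||^2 over the box, i.e. takes a
    # projected gradient step on the block; sigma is doubled until a
    # sufficient-decrease test holds.  Illustrative sketch only.
    x = project_box(np.asarray(x0, dtype=float), lo, hi)
    for _ in range(max_sweeps):
        x_prev = x.copy()
        for B in blocks:
            g_B = grad(x)[B]
            sigma = 1.0
            while True:
                trial = x.copy()
                trial[B] = project_box(x[B] - g_B / sigma, lo[B], hi[B])
                d = trial[B] - x[B]
                # Accept when the actual decrease beats a fraction of
                # the regularized-model decrease (or sigma is huge).
                if f(trial) <= f(x) - 0.1 * sigma * d.dot(d) or sigma > 1e12:
                    x = trial
                    break
                sigma *= 2.0
        if np.linalg.norm(x - x_prev) <= eps:  # crude stationarity proxy
            break
    return x

# Example: a separable nonconvex quartic on the box [-2, 2]^4, two blocks.
f = lambda x: float(np.sum(x**4 - 3.0 * x**2 + x))
grad = lambda x: 4.0 * x**3 - 6.0 * x + 1.0
lo, hi = -2.0 * np.ones(4), 2.0 * np.ones(4)
x_star = block_cd_box(f, grad, np.zeros(4), lo, hi,
                      blocks=[np.arange(0, 2), np.arange(2, 4)])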


Similar articles

On the Oracle Complexity of First-Order and Derivative-Free Algorithms for Smooth Nonconvex Minimization

The (optimal) worst-case function/gradient evaluation complexity analysis available for the Adaptive Regularization algorithm with Cubics (ARC) for smooth nonconvex unconstrained optimization is extended to finite-difference versions of this algorithm, yielding complexity bounds for first-order and derivative-free methods applied to the same problem class. A comparison with the results obtai...


Convergence of a Block Coordinate Descent Method for Nondifferentiable Minimization

We study the convergence properties of a (block) coordinate descent method applied to minimize a nondifferentiable (nonconvex) function $f(x_1, \ldots, x_N)$ with certain separability and regularity properties. Assuming that $f$ is continuous on a compact level set, the subsequence convergence of the iterates to a stationary point is shown when either $f$ is pseudoconvex in every pair of coordinate ...


Solutions and Optimality Criteria to Box Constrained Nonconvex Minimization Problems

This paper presents a canonical duality theory for solving nonconvex polynomial programming problems subject to box constraints. It is proved that, under certain conditions, the constrained nonconvex problems can be converted to the so-called canonical (perfect) dual problems, which can be solved by deterministic methods. Both global and local extrema of the primal problems can be identified b...


Smooth minimization of nonsmooth functions with parallel coordinate descent methods

We study the performance of a family of randomized parallel coordinate descent methods for minimizing the sum of a nonsmooth and a separable convex function. The problem class includes as a special case L1-regularized L1 regression and the minimization of the exponential loss (the “AdaBoost problem”). We assume the input data defining the loss function is contained in a sparse $m \times n$ matrix $A$ with at ...
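To illustrate the problem class mentioned in this teaser, the sketch below applies randomized coordinate descent with soft-thresholding to L1-regularized least squares. This is a generic serial textbook update, not the parallel method of the paper, and all names are illustrative.

import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * |.|
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def random_cd_lasso(A, b, lam, n_iters=5000, seed=0):
    # Randomized coordinate descent for min 0.5*||Ax - b||^2 + lam*||x||_1.
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    r = A @ x - b                      # residual, kept up to date
    col_sq = np.sum(A * A, axis=0)     # per-coordinate Lipschitz constants
    for _ in range(n_iters):
        j = rng.integers(n)
        if col_sq[j] == 0.0:
            continue
        g_j = A[:, j] @ r              # partial derivative of smooth part
        x_new = soft_threshold(x[j] - g_j / col_sq[j], lam / col_sq[j])
        r += A[:, j] * (x_new - x[j])  # cheap residual update
        x[j] = x_new
    return x

# Example: recover a sparse vector from random measurements.
A = np.random.default_rng(1).standard_normal((40, 100))
b = A[:, :3] @ np.array([1.0, -2.0, 3.0])
x_hat = random_cd_lasso(A, b, lam=0.1)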


Low complexity secant quasi-Newton minimization algorithms for nonconvex functions

In this work, some interesting relations between results on basic optimization and algorithms for nonconvex functions (such as BFGS and secant methods) are pointed out. In particular, some innovative tools for improving our recent secant BFGS-type and LQN algorithms are described in detail.



Journal

Journal title: Journal of Global Optimization

Year: 2022

ISSN: 1573-2916, 0925-5001

DOI: https://doi.org/10.1007/s10898-022-01168-6